Advancing Marine Bioacoustics with Deep Generative Models: A Hybrid Augmentation Strategy for Southern Resident Killer Whale Detection
Padovese, Bruno, Frazao, Fabio, Dowd, Michael, Joy, Ruth
Automated detection and classification of marine mammal vocalizations is critical for conservation and management efforts but is hindered by limited annotated datasets and the acoustic complexity of real-world marine environments. Data augmentation has proven to be an effective strategy to address this limitation by increasing dataset diversity and improving model generalization without requiring additional field data. However, most augmentation techniques used to date rely on effective but relatively simple transformations, leaving open the question of whether deep generative models can provide additional benefits. In this study, we evaluate the potential of deep generative models for data augmentation in marine mammal call detection, including Variational Autoencoders, Generative Adversarial Networks, and Denoising Diffusion Probabilistic Models. Using Southern Resident Killer Whale (Orcinus orca) vocalizations from two long-term hydrophone deployments in the Salish Sea, we compare these approaches against traditional augmentation methods such as time-shifting and vocalization masking. While all generative approaches improved classification performance relative to the baseline, diffusion-based augmentation yielded the highest recall (0.87) and overall F1-score (0.75). A hybrid strategy combining generative-based synthesis with traditional methods achieved the best overall performance with an F1-score of 0.81. We hope this study encourages further exploration of deep generative models as complementary augmentation strategies to advance acoustic monitoring of threatened marine mammal populations.
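As a rough illustration of the hybrid strategy the abstract describes, the sketch below mixes two traditional transformations (time-shifting and vocalization masking) with pre-generated synthetic spectrograms in one training batch. The function names, the mixing ratio, and the use of random arrays in place of real spectrograms are assumptions for illustration, not the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_shift(spec, max_shift=16):
    """Circularly shift a spectrogram along the time axis (traditional augmentation)."""
    shift = rng.integers(-max_shift, max_shift + 1)
    return np.roll(spec, shift, axis=1)

def mask_vocalization(spec, width=8):
    """Zero out a random block of time frames, hiding part of the call."""
    spec = spec.copy()
    start = rng.integers(0, spec.shape[1] - width)
    spec[:, start:start + width] = 0.0
    return spec

def hybrid_batch(real_specs, synthetic_specs, synth_fraction=0.5):
    """Combine traditionally augmented real examples with generative samples."""
    n_synth = int(len(real_specs) * synth_fraction)
    augmented = [mask_vocalization(time_shift(s)) for s in real_specs]
    return augmented + list(synthetic_specs[:n_synth])

# toy usage: 4 "real" and 2 "generated" 64x128 spectrograms
real = [rng.standard_normal((64, 128)) for _ in range(4)]
fake = [rng.standard_normal((64, 128)) for _ in range(2)]
batch = hybrid_batch(real, fake)
print(len(batch))  # 6
```

In practice the synthetic spectrograms would come from the trained VAE, GAN, or diffusion model rather than from a random generator.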
RePro: Training Language Models to Faithfully Recycle the Web for Pretraining
High-quality pretraining data is the fossil fuel of large language models (LLMs), yet its reserves are running low for frontier models. In this paper, we introduce RePro, a novel web recycling method that trains a relatively small LM with reinforcement learning to generate effective and faithful rephrasings of pretraining data. Specifically, we design one quality reward and three faithfulness rewards, optimizing the LM rephraser to convert organic data into high-quality rephrasings while maintaining its core semantics and structure. In our experiment, we train a 4B rephraser to recycle 72B tokens sampled from DCLM-RefinedWeb. Pretraining results on 400M and 1.4B models demonstrate that RePro delivers 4.7%-14.0% relative accuracy gains over the organic-only baseline on 22 downstream tasks. RePro also outperforms ReWire, the state-of-the-art web recycling method that prompts a 70B rephraser, as well as the organic baseline with a 4x larger data pool. Experiments with different amounts of recycled data highlight that RePro improves organic data efficiency by 2-3x. Individual and distributional analyses validate that RePro preserves more critical information and faithfully reflects the characteristics of organic data compared to prompting-based methods. Together, these results show that RePro provides an efficient and controllable path to effectively harness the fossil fuel of LLM pretraining. We open-source our code, rephraser, and recycled data at https://github.com/cxcscmu/RePro.
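The reward structure described above, one quality reward plus three faithfulness rewards, can be sketched as follows. The individual reward functions here (token overlap, length ratio, shared opening) are illustrative stand-ins, not the paper's actual definitions, and the weighting scheme is an assumption.

```python
def quality_reward(rephrased: str) -> float:
    # stand-in: prefer reasonably long, well-formed outputs
    words = rephrased.split()
    return min(len(words) / 50.0, 1.0)

def faithfulness_rewards(original: str, rephrased: str) -> list[float]:
    """Three illustrative faithfulness signals: vocabulary overlap,
    length preservation, and shared opening token."""
    orig_tokens, new_tokens = set(original.split()), set(rephrased.split())
    overlap = len(orig_tokens & new_tokens) / max(len(orig_tokens), 1)
    length_ratio = min(len(rephrased), len(original)) / max(len(rephrased), len(original), 1)
    starts_same = 1.0 if original.split()[:1] == rephrased.split()[:1] else 0.0
    return [overlap, length_ratio, starts_same]

def total_reward(original: str, rephrased: str, w_quality=1.0, w_faith=1.0) -> float:
    """Combine one quality reward with the mean of three faithfulness rewards,
    as the RL objective would see it."""
    faith = faithfulness_rewards(original, rephrased)
    return w_quality * quality_reward(rephrased) + w_faith * sum(faith) / len(faith)

r = total_reward("the quick brown fox", "the quick brown fox jumps")
print(round(r, 3))
```

The rephraser policy would be optimized against this scalar with a standard RL algorithm; the key design point is that faithfulness terms prevent the quality reward from drifting the rephrasing away from the organic text's semantics.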
MedFormer: a data-driven model for forecasting the Mediterranean Sea
Epicoco, Italo, Donno, Davide, Accarino, Gabriele, Norberti, Simone, Grandi, Alessandro, Giurato, Michele, McAdam, Ronan, Elia, Donatello, Clementi, Emanuela, Nassisi, Paola, Scoccimarro, Enrico, Coppini, Giovanni, Gualdi, Silvio, Aloisio, Giovanni, Masina, Simona, Boccaletti, Giulio, Navarra, Antonio
Accurate ocean forecasting is essential for supporting a wide range of marine applications. Recent advances in artificial intelligence have highlighted the potential of data-driven models to outperform traditional numerical approaches, particularly in atmospheric weather forecasting. However, extending these methods to ocean systems remains challenging due to their inherently slower dynamics and complex boundary conditions. In this work, we present MedFormer, a fully data-driven deep learning model specifically designed for medium-range ocean forecasting in the Mediterranean Sea. MedFormer is based on a U-Net architecture augmented with 3D attention mechanisms and operates at a high horizontal resolution of 1/24°. The model is trained on 20 years of daily ocean reanalysis data and fine-tuned with high-resolution operational analyses. It generates 9-day forecasts using an autoregressive strategy. The model leverages both historical ocean states and atmospheric forcings, making it well-suited for operational use. We benchmark MedFormer against the state-of-the-art Mediterranean Forecasting System (MedFS), developed at the Euro-Mediterranean Center on Climate Change (CMCC), using both analysis data and independent observations. Forecast skill, evaluated with the Root Mean Squared Difference and the Anomaly Correlation Coefficient, indicates that MedFormer consistently outperforms MedFS across key 3D ocean variables. These findings underscore the potential of data-driven approaches like MedFormer to complement, or even surpass, traditional numerical ocean forecasting systems in both accuracy and computational efficiency.
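The autoregressive 9-day strategy mentioned above can be sketched as a rollout loop: a one-step (one-day) predictor is applied repeatedly, feeding each prediction back in as the next state. The toy linear `step_model` below is a hypothetical stand-in for MedFormer's U-Net with 3D attention.

```python
import numpy as np

def step_model(state, forcing):
    """Stand-in for a one-day predictor: a toy relaxation toward the
    forcing; the real model is a U-Net with 3D attention."""
    return 0.9 * state + 0.1 * forcing

def autoregressive_forecast(initial_state, forcings, horizon=9):
    """Roll the one-step model forward, feeding each prediction back in,
    mirroring the 9-day autoregressive strategy."""
    state = initial_state
    trajectory = []
    for day in range(horizon):
        state = step_model(state, forcings[day])
        trajectory.append(state)
    return np.stack(trajectory)

# toy usage: one scalar field on a 4x4 grid, constant atmospheric forcing
init = np.zeros((4, 4))
atm = np.ones((9, 4, 4))
forecast = autoregressive_forecast(init, atm)
print(forecast.shape)  # (9, 4, 4)
```

A known trade-off of this design is error accumulation: each step consumes the previous prediction, so single-step errors compound over the horizon.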
Ovis2.5 Technical Report
Lu, Shiyin, Li, Yang, Xia, Yu, Hu, Yuwei, Zhao, Shanshan, Ma, Yanqing, Wei, Zhichao, Li, Yinglun, Duan, Lunhao, Zhao, Jianshan, Han, Yuxuan, Li, Haijun, Chen, Wanying, Tang, Junke, Hou, Chengkun, Du, Zhixing, Zhou, Tianli, Zhang, Wenjie, Ding, Huping, Li, Jiahe, Li, Wen, Hu, Gui, Gu, Yiliang, Yang, Siran, Wang, Jiamang, Sun, Hailong, Wang, Yibo, Sun, Hui, Huang, Jinlong, He, Yuping, Shi, Shengze, Zhang, Weihong, Zheng, Guodong, Jiang, Junpeng, Gao, Sensen, Wu, Yi-Feng, Chen, Sijia, Chen, Yuhui, Chen, Qing-Guo, Xu, Zhao, Luo, Weihua, Zhang, Kaifu
We present Ovis2.5, a successor to Ovis2 designed for native-resolution visual perception and strong multimodal reasoning. Ovis2.5 integrates a native-resolution vision transformer that processes images at their native, variable resolutions, avoiding the degradation from fixed-resolution tiling and preserving both fine detail and global layout -- crucial for visually dense content like complex charts. To strengthen reasoning, we train the model to move beyond linear chain-of-thought and perform reflection -- including self-checking and revision. This advanced capability is exposed as an optional "thinking mode" at inference time, allowing users to trade latency for enhanced accuracy on difficult inputs. The model is trained via a comprehensive five-phase curriculum that progressively builds its skills. The process begins with foundational visual and multimodal pretraining, advances through large-scale instruction tuning, and culminates in alignment and reasoning enhancement using DPO and GRPO. To scale these upgrades efficiently, we employ multimodal data packing and hybrid parallelism, yielding a significant end-to-end speedup. We release two open-source models: Ovis2.5-9B and Ovis2.5-2B. The latter continues the "small model, big performance" philosophy of Ovis2, making it ideal for resource-constrained, on-device scenarios. On the OpenCompass multimodal leaderboard, Ovis2.5-9B averages 78.3, marking a substantial improvement over its predecessor, Ovis2-8B, and achieving state-of-the-art results among open-source MLLMs in the sub-40B parameter range; Ovis2.5-2B scores 73.9, establishing SOTA for its size. Beyond aggregate scores, Ovis2.5 achieves leading results on STEM benchmarks, exhibits strong capabilities on grounding and video tasks, and achieves open-source SOTA at its scale for complex chart analysis.
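The optional "thinking mode" described above amounts to an inference-time switch: when enabled, the model spends extra decoding on reflection (self-checking and revision) before answering. The sketch below illustrates only the control flow; `generate` is a hypothetical stand-in for the model's decoding call, not Ovis2.5's API.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for the model's decoding call."""
    return f"answer<{prompt}>"

def infer(prompt: str, thinking: bool = False) -> str:
    if thinking:
        draft = generate(prompt)  # first pass: draft answer
        # reflection pass: self-check and revise the draft (extra latency)
        return generate(f"check and revise: {draft}")
    return generate(prompt)       # fast path: single pass

fast = infer("2+2?")
slow = infer("2+2?", thinking=True)
print(fast)
print(slow)
```

The point of exposing the switch to users is the latency/accuracy trade: the reflection pass roughly doubles the decoding work, so it is worth enabling only on difficult inputs.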
DipLLM: Fine-Tuning LLM for Strategic Decision-making in Diplomacy
Xu, Kaixuan, Chai, Jiajun, Li, Sicheng, Fu, Yuqian, Zhu, Yuanheng, Zhao, Dongbin
Diplomacy is a complex multiplayer game that requires both cooperation and competition, posing significant challenges for AI systems. Traditional methods rely on equilibrium search to generate extensive game data for training, which demands substantial computational resources. Large Language Models (LLMs) offer a promising alternative, leveraging pre-trained knowledge to achieve strong performance with relatively small-scale fine-tuning. However, applying LLMs to Diplomacy remains challenging due to the exponential growth of possible action combinations and the intricate strategic interactions among players. To address this challenge, we propose DipLLM, a fine-tuned LLM-based agent that learns equilibrium policies for Diplomacy. DipLLM employs an autoregressive factorization framework to simplify the complex task of multi-unit action assignment into a sequence of unit-level decisions. By defining an equilibrium policy within this framework as the learning objective, we fine-tune the model using only 1.5% of the data required by the state-of-the-art Cicero model, surpassing its performance. Our results demonstrate the potential of fine-tuned LLMs for tackling complex strategic decision-making in multiplayer games.
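The autoregressive factorization described above replaces one intractable joint choice over all units with a sequence of per-unit decisions, each conditioned on the choices already made. The sketch below shows the decomposition only; `unit_policy` is a toy stand-in (the real per-unit policy is produced by the fine-tuned LLM), and the action names are invented.

```python
def unit_policy(unit, chosen_so_far, legal_actions):
    """Toy per-unit policy: pick the first legal action not already taken,
    conditioning on earlier units' choices."""
    for action in sorted(legal_actions[unit]):
        if action not in chosen_so_far:
            return action
    return sorted(legal_actions[unit])[0]

def factorized_joint_action(units, legal_actions):
    """Build the joint action one unit at a time, autoregressively."""
    chosen = []
    for unit in units:
        chosen.append(unit_policy(unit, chosen, legal_actions))
    return chosen

# toy usage: two units with overlapping legal orders
legal = {"A": ["hold", "move_Par"], "B": ["move_Par", "support_A"]}
joint = factorized_joint_action(["A", "B"], legal)
print(joint)  # ['hold', 'move_Par']
```

The benefit is combinatorial: with U units and A actions each, the joint space has A^U entries, while the factorized policy only ever scores A options at a time.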
DualToken: Towards Unifying Visual Understanding and Generation with Dual Visual Vocabularies
Song, Wei, Wang, Yuran, Song, Zijia, Li, Yadong, Sun, Haoze, Chen, Weipeng, Zhou, Zenan, Xu, Jianhua, Wang, Jiaqi, Yu, Kaicheng
The differing representation spaces required for visual understanding and generation pose a challenge in unifying them within the autoregressive paradigm of large language models. A vision tokenizer trained for reconstruction excels at capturing low-level perceptual details, making it well-suited for visual generation but lacking high-level semantic representations for understanding tasks. Conversely, a vision encoder trained via contrastive learning aligns well with language but struggles to decode back into the pixel space for generation tasks. To bridge this gap, we propose DualToken, a method that unifies representations for both understanding and generation within a single tokenizer. However, directly integrating reconstruction and semantic objectives in a single tokenizer creates conflicts, leading to degradation in both reconstruction quality and semantic performance. Instead of forcing a single codebook to handle both semantic and perceptual information, DualToken disentangles them by introducing separate codebooks for high- and low-level features, effectively transforming their inherent conflict into a synergistic relationship. As a result, DualToken achieves state-of-the-art performance in both reconstruction and semantic tasks while demonstrating remarkable effectiveness in downstream MLLM understanding and generation tasks. Notably, we also show that DualToken, as a unified tokenizer, surpasses the naive combination of two distinct types of vision encoders, providing superior performance within a unified MLLM.
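The separate-codebook idea above can be sketched as two independent vector-quantization steps: one over high-level (semantic) features, one over low-level (perceptual) features. The codebook sizes, feature dimensions, and random codebooks below are illustrative assumptions, not DualToken's actual configuration.

```python
import numpy as np

def nearest_code(vec, codebook):
    """Quantize a feature vector to the index of its nearest codebook entry
    (the standard VQ step)."""
    dists = np.linalg.norm(codebook - vec, axis=1)
    return int(np.argmin(dists))

def dual_tokenize(semantic_feat, perceptual_feat, sem_codebook, pix_codebook):
    """Tokenize with separate codebooks for semantic and perceptual features,
    rather than forcing one codebook to serve both objectives."""
    return (nearest_code(semantic_feat, sem_codebook),
            nearest_code(perceptual_feat, pix_codebook))

# toy usage with random stand-in codebooks and features
rng = np.random.default_rng(0)
sem_cb = rng.standard_normal((256, 16))   # semantic vocabulary
pix_cb = rng.standard_normal((1024, 8))   # perceptual vocabulary
sem_id, pix_id = dual_tokenize(rng.standard_normal(16),
                               rng.standard_normal(8), sem_cb, pix_cb)
print(sem_id, pix_id)
```

Because the two vocabularies are disjoint, the reconstruction objective only shapes the perceptual codebook and the alignment objective only shapes the semantic one, which is how the conflict between the two objectives is avoided.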
FLP-XR: Future Location Prediction on Extreme Scale Maritime Data in Real-time
Theodoropoulos, George S., Patakis, Andreas, Tritsarolis, Andreas, Theodoridis, Yannis
Movements of maritime vessels are inherently complex and challenging to model due to the dynamic and often unpredictable nature of maritime operations. Even within structured maritime environments, such as shipping lanes and port approaches, where vessels adhere to navigational rules and predefined sea routes, uncovering underlying patterns is far from trivial. The necessity for accurate modeling of the mobility of maritime vessels arises from the numerous applications it serves, including risk assessment for collision avoidance, optimization of shipping routes, and efficient port management. This paper introduces FLP-XR, a model that leverages maritime mobility data to construct a robust framework that offers precise predictions while ensuring extremely fast training and inference capabilities. We demonstrate the efficiency of our approach through an extensive experimental study using three real-world AIS datasets. According to the experimental results, FLP-XR outperforms the current state-of-the-art in many cases, while its training and inference run 2-3 orders of magnitude faster.
Regional Ocean Forecasting with Hierarchical Graph Neural Networks
Holmberg, Daniel, Clementi, Emanuela, Roos, Teemu
Accurate ocean forecasting systems are vital for understanding marine dynamics, which play a crucial role in environmental management and climate adaptation strategies. Traditional numerical solvers, while effective, are computationally expensive and time-consuming. Recent advancements in machine learning have revolutionized weather forecasting, offering fast and energy-efficient alternatives. Building on these advancements, we introduce SeaCast, a neural network designed for high-resolution, medium-range ocean forecasting. SeaCast employs a graph-based framework to effectively handle the complex geometry of ocean grids and integrates external forcing data tailored to the regional ocean context. Our approach is validated through experiments at a high spatial resolution using the operational numerical model of the Mediterranean Sea provided by the Copernicus Marine Service, along with both numerical and data-driven atmospheric forcings.
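The graph-based framework with external forcings described above can be illustrated with a single message-passing round over an ocean grid graph: each node aggregates its neighbors' features and adds a per-node atmospheric forcing term. This toy mean-aggregation layer is a hypothetical stand-in for SeaCast's hierarchical GNN layers, not the model itself.

```python
import numpy as np

def message_passing_step(node_feats, edges, forcing):
    """One round of mean-neighbor message passing with external forcing.
    `edges` is a list of directed (src, dst) pairs over grid nodes."""
    n = node_feats.shape[0]
    agg = np.zeros_like(node_feats)
    deg = np.zeros(n)
    for src, dst in edges:
        agg[dst] += node_feats[src]
        deg[dst] += 1
    deg = np.maximum(deg, 1)  # avoid division by zero for isolated nodes
    # residual update: own state + mean neighbor message + forcing
    return node_feats + agg / deg[:, None] + forcing

# toy usage: 4-node line graph with bidirectional edges, 2 features per node
feats = np.ones((4, 2))
edges = [(0, 1), (1, 2), (2, 3), (3, 2), (2, 1), (1, 0)]
force = 0.1 * np.ones((4, 2))
out = message_passing_step(feats, edges, force)
print(out.shape)  # (4, 2)
```

A graph formulation is a natural fit here because ocean grids have irregular coastline boundaries that a plain convolutional layout handles poorly; the edge list simply omits land cells.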
Few-Shot Joint Multimodal Entity-Relation Extraction via Knowledge-Enhanced Cross-modal Prompt Model
Yuan, Li, Cai, Yi, Huang, Junsheng
Joint Multimodal Entity-Relation Extraction (JMERE) is a challenging task that aims to extract entities and their relations from text-image pairs in social media posts. Existing methods for JMERE require large amounts of labeled data. However, gathering and annotating fine-grained multimodal data for JMERE poses significant challenges. To address this, we first construct diverse and comprehensive multimodal few-shot datasets fitted to the original data distribution. To address the insufficient information in the few-shot setting, we introduce the Knowledge-Enhanced Cross-modal Prompt Model (KECPM) for JMERE. This method can effectively address the problem of insufficient information in the few-shot setting by guiding a large language model to generate supplementary background knowledge. Our proposed method comprises two stages: (1) a knowledge ingestion stage that dynamically formulates prompts based on semantic similarity to guide ChatGPT in generating relevant knowledge, and employs self-reflection to refine that knowledge; (2) a knowledge-enhanced language model stage that merges the auxiliary knowledge with the original input and utilizes a transformer-based model to align with JMERE's required output format. We extensively evaluate our approach on a few-shot dataset derived from the JMERE dataset, demonstrating its superiority over strong baselines in terms of both micro and macro F1 scores. Additionally, we present qualitative analyses and case studies to elucidate the effectiveness of our model.
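The two-stage flow above can be sketched as a small pipeline: stage 1 prompts an LLM for background knowledge and filters it, stage 2 concatenates that knowledge with the original post for the extraction model. `ask_llm` is a hypothetical stand-in for the ChatGPT call the paper uses; here it just returns a canned fact, and the self-reflection step is reduced to an emptiness check.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a ChatGPT call."""
    canned = {"Messi": "Lionel Messi is an Argentine footballer."}
    for name, fact in canned.items():
        if name in prompt:
            return fact
    return ""

def knowledge_ingestion(post: str) -> str:
    """Stage 1: formulate a prompt and collect auxiliary knowledge."""
    prompt = f"Provide background knowledge for: {post}"
    knowledge = ask_llm(prompt)
    # self-reflection step (simplified): drop empty knowledge
    return knowledge if knowledge else "(no knowledge)"

def knowledge_enhanced_input(post: str) -> str:
    """Stage 2: merge knowledge with the original input for the extractor."""
    return f"[KNOWLEDGE] {knowledge_ingestion(post)} [POST] {post}"

merged = knowledge_enhanced_input("Messi scores again!")
print(merged)
```

The merged string would then be fed to the transformer-based extraction model, which now sees background facts the short social media post itself does not contain.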
Development of an AI Anti-Bullying System Using Large Language Model Key Topic Detection
Tassava, Matthew, Kolodjski, Cameron, Milbrath, Jordan, Bishop, Adorah, Flanders, Nathan, Fetsch, Robbie, Hanson, Danielle, Straub, Jeremy
Cyberbullying has become a pronounced problem due to the increasing ubiquity of online platforms that provide a means to conduct it. A significant amount of this cyberbullying is conducted by and targets teenagers. It is difficult for teenage students to shut themselves off from the digital world in which the cyberbullying is taking place. Given how entrenched the use of digital apps is among today's youth, and the pronounced consequences of cyberbullying - including victim self-harm, in some cases - it is at least as much of a threat as physical bullying. Additionally, because of the obfuscation caused by the online environment, authorities (such as parents, teachers and law enforcement) may have difficulty determining what has occurred and who the participating actors are.